
Search results for: "Gary Marcus"


11 mentions found


Here's why A.I. needs a six-month pause: NYU Professor Gary Marcus
Gary Marcus, New York University professor and Geometric Intelligence founder, joins 'Squawk on the Street' to discuss his thoughts on A.I. and why there are risks in this space.
March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Rather than pause research, one critic quoted in the article said, AI researchers should be subjected to greater transparency requirements: "If you do AI research, you should be very transparent about how you do it."
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI.
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI's newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter also detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI. Sam Altman, chief executive at OpenAI, hasn't signed the letter, a spokesperson at Future of Life told Reuters.
On Tuesday, Google announced it was bringing AI-powered chat technology to Gmail and Google Docs, letting it help compose emails or documents. On Thursday, Microsoft said that its popular business apps like Word and Excel would soon come bundled with ChatGPT-like technology dubbed Copilot. But this time, Microsoft is pitching the technology as being "usefully wrong." Microsoft chief scientist and technical fellow Jaime Teevan said that when Copilot "gets things wrong or has biases or is misused," Microsoft has "mitigations in place." "I studied AI for decades and I feel this huge sense of responsibility with this powerful new tool," Teevan said.
Dmitri Brereton said Bing's new AI chatbot "got some answers completely wrong" during its demo. As part of Microsoft's unveiling of the new tech, Bing's AI was asked to list the pros and cons of the three best-selling pet vacuums. "I hope Bing AI enjoys being sued for libel," he wrote. The AI arms race may lead to the spread of misinformation. Brereton's observations come as Big Tech companies like Google and Microsoft enter an AI arms race. While Brereton told Insider that generative AI search engines like the new Bing can be "quite transformative," he noted that releasing it prematurely "could lead to big problems."
Maarten Bosma, an ex-Google Brain engineer, said in a tweet that Alphabet isn't taking AI seriously. Bosma's tweet comes after Google demoed its latest AI tools in Paris, including ChatGPT rival Bard. Some critics called Google's demo "frankly, bad" and a "disaster" amid a growing AI arms race. "I think the Google presentation signaled (rightfully or wrongly) that they are not really taking it seriously," Bosma tweeted. Microsoft's demo "was presented as a revolution," while Google's demo was presented as a "disaster," Marcus wrote.
A Google search executive who spoke to Insider on background said he definitely wasn't worried about the threat from ChatGPT. "What's unnerving is it looks as good and confident when it's wrong and when it's right," said Daniel Tunkelang, a search consultant and former employee on Google's search team. Google search is, of course, not immune to misinformation, and regularly surfaces links to pages that contain inaccuracies. But unlike Google search, which links to sources for information when it answers a question, an LLM runs into major problems there. Even in that situation, Google's search results would be well positioned.
Insights from a Professional Whistleblower
2022-11-16 | www.wsj.com | time to read: 1 min
AI: The Next Frontier
Artificial intelligence is rapidly transforming life and business, raising questions about the future of jobs, security and society. Builder.ai co-founder Sachin Dev Duggal and professor and author Gary Marcus join the Journal's Deputy Editor in Chief Charles Forelle to discuss how business leaders should navigate this challenging new environment.
Persons: Sachin Dev Duggal, Gary Marcus, Charles Forelle
Rethink Robotics was founded in 2008 around the idea of the cobot, a robot working alongside humans in ways that traditional automation couldn't. "There is no such thing as the cobot without Rod Brooks," Beane said. "No one needs a two-armed robot," Beane said, describing that design decision as "humans projecting onto a robot their own physical form." The opportunity for robotics technology remains significant, though it is still trailing other automation approaches in market penetration. "Manufacturing is about high throughput and high consistency and you can automate the heck out of it without a cobot," Beane said.
Chatbots: A long and complicated history
2022-08-20 | by Catherine Thorbecke | edition.cnn.com | time to read: +11 min
Nearly 60 years later, the market is flooded with chatbots of varying quality and use cases from tech companies, banks, airlines and more. Others, meanwhile, warn the technology behind AI-powered chatbots remains much more limited than some people wish it to be. While they used similar technology to the earlier, social chatbots, Khudanpur said, "you really couldn't call them chatbots."
Return to social chatbots, and social problems: In the early 2000s, researchers began to revisit the development of social chatbots that could carry an extended conversation with humans. This refrain would be repeated by other tech giants that released public chatbots, including Meta's BlenderBot3, released earlier this month.
Total: 11